How adolescents can inform new social media policies by donating their feeds for research
Author: Megan Wood, Senior Research Fellow
Moral panic is nothing new. As televisions proliferated throughout homes in the 1950s, fears grew that children’s minds were being corrupted by harmful content and subliminal messaging, transforming them into hordes of unthinking, unfeeling zombies. Fast-forward to 2026, and the hysteria surrounding social media is strikingly familiar. In 2024, Oxford’s Word of the Year was ‘brain rot’; in 2025, terms such as ‘parasocial’, ‘rage bait’ and ‘slop’ followed – reflecting just how deeply these anxieties have seeped into everyday language.
But here’s the plot twist: unlike television, social media is being banned for under-16s in Australia, with Norway, Denmark, Malaysia, and France preparing policies of their own. All this despite little scientific evidence about its harms.
Perhaps we shouldn’t blame governments for taking such an extreme approach. More than 75% of parents support the ban, and it would be politically unwise to wait the decade it will take for research to catch up. Social media platforms are also acting suspiciously: Meta has been accused of multiple cover-ups, withholding findings from internal studies of how Instagram and Facebook affect teenagers’ mental health. Something doesn’t quite add up.
That’s not to say there isn’t any evidence in the public domain. But much of it starts with a deceptively simple question: ‘How much time do you spend on TikTok, Snapchat, Instagram, or YouTube?’ Our team has asked this too, using questionnaire data from Born in Bradford: Age of Wonder. Like many other studies, we found that spending more time on social media was associated with poorer mental health in adolescents.
But it’s a really hard question to answer accurately. Have a go yourself: how much time do you spend each week on your favourite app? Now check against what your smartphone says. If you’re anything like the average teenager, you’ll have overestimated by about 7 hours per week. It’s even harder when usage is fragmented across multiple platforms – a quick scroll on TikTok here, a check-in on Snapchat there – and the minutes become almost impossible to keep track of.
Biases aside, self-reported screentime estimates provide only VHS-quality insights into teenagers’ social media habits. They can’t capture what teenagers are looking at, or when. A three-minute makeup tutorial is likely to have a very different effect on wellbeing than three minutes of content encouraging self-harm. Likewise, the consequences of frequent late-night doomscrolling sessions are likely to differ from those of little-and-often check-ins throughout the day, even if the total duration is the same.
TV regulators understood this long ago. Harm wasn’t just about what was on screen, but about who was watching, when, and for how long. Their solution? The 9pm watershed: a pragmatic scheduling fix, not a moral judgement or an outright ban.
Social media regulation, by contrast, is being designed in near-complete ignorance of these patterns. We are asked to trust that the platforms are in control. Indeed, TikTok confidently asserts that it removes 99.9% of videos that violate its terms before they reach 100,000 views. That sounds impressive, but it still amounts to 190,000 videos slipping through the net, each viewed more than 100,000 times in the space of three months [1].
Perhaps we shouldn’t be so harsh: they sift through more than 27 billion videos every 90 days. But setting aside the 190,000 videos they know got through, how can anyone be sure the other 27 billion aren’t in violation? What about the false negatives? Perhaps this is why, despite the Online Safety Act, teenagers can still encounter harmful content in as little as 10 minutes. The truth is, in an infinite sea of content delivered by personalised algorithms, the only person who knows what is in their social media feed is the individual subject to it. And given how unreliable we are at recalling any of it, how can we even begin to understand the scale of harmful content young people are exposed to?
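For the numerically inclined, the back-of-the-envelope arithmetic behind that 190,000 figure looks something like this – a rough sketch using only the publicly reported numbers above:

```python
# Rough arithmetic behind the "190,000 videos slipping through" figure.
# Assumes the ~190,000,000 removals [1] represent the 99.9% of violating
# videos caught before reaching 100,000 views in a 90-day window.

removed = 190_000_000   # violating videos removed in ~90 days
catch_rate = 0.999      # proportion of violating videos TikTok says it catches

total_violating = removed / catch_rate       # implied total violating videos
slipped_through = total_violating - removed  # the 0.1% that got past moderation

print(f"Slipped through: {slipped_through:,.0f}")  # -> about 190,190
```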
Enter data donation. Every time we open a social media app, platform holders log what we do, for how long, and how we interact. Data donation leverages an individual’s legal right to request this data and share it with researchers through a secure, custom-built platform. Automated data cleaning strips out all personal information and anything related to comments, likes, shares, or direct messages. We’re left with objective, time-stamped links showing what content was viewed and when. With this level of detail, the possibilities really open up: we can study usage patterns across the day, link to data on sleep, wellbeing, and other daily habits, and even use machine-learning tools to classify the types of content people are engaging with.
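For the technically curious, here is a minimal sketch of what that cleaning step might look like. The field names and structure are illustrative assumptions, not our production pipeline or any platform’s actual export schema:

```python
# Minimal sketch of the automated cleaning step. Assumes a donation export
# arrives as a list of activity records; field names are illustrative only.

KEPT_FIELDS = {"timestamp", "video_url"}  # objective, time-stamped views only

def clean_donation(raw_records: list[dict]) -> list[dict]:
    """Strip everything except what was viewed and when.

    Names, handles, comments, likes, shares, and direct messages are
    all discarded, so nothing in the output identifies the donor.
    """
    return [{k: v for k, v in record.items() if k in KEPT_FIELDS}
            for record in raw_records]

# Example: one raw record in, one anonymous viewing event out.
raw = [{
    "username": "example_user",                # dropped
    "direct_messages": ["..."],                # dropped
    "timestamp": "2026-01-15T23:42:00",        # kept: when it was viewed
    "video_url": "https://example.com/v/abc",  # kept: what was viewed
}]
print(clean_donation(raw))
# -> [{'timestamp': '2026-01-15T23:42:00',
#      'video_url': 'https://example.com/v/abc'}]
```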
At Born in Bradford – one of the UK’s largest population health research programmes for young people – we are trailblazing an ethical, youth-led approach to social media data donation, creating a blueprint for how this research can be done responsibly at scale. What makes this work different is not just the method, but the setting. By embedding data donation within Born in Bradford, we are building what will become one of the largest and most detailed studies of adolescent social media use anywhere in the world. Crucially, this work is grounded in a city-wide research infrastructure that already links a plethora of data about young people’s health and wellbeing – allowing us to move beyond snapshots and towards a genuinely contextual understanding of their online lives.
However, getting young people on board might be a challenge. Let’s be honest: handing over your social media data to researchers sounds a bit… intimidating, especially in an age of data privacy concerns. Would you do it? And if it were guaranteed to be done safely, securely, quickly, and without anyone prying into your private life – would you do it then? That’s exactly what we asked a panel of young people attending sixth form in Bradford…
Several concerns and questions came up again and again:
“Will you know it’s me?”
All data are anonymised immediately, stripped of names, schools, and any other identifying information. As researchers, we cannot link data back to individuals – only to other anonymised data we hold (e.g., wellbeing and sleep patterns). We have no interest in any individual’s online habits. Instead, all participants’ data are combined to create a broad picture of social media consumption, allowing us to explore trends and patterns at a population level – for example, comparing different age groups, or those who get less sleep with those who get more.
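As a toy illustration of what working at a population level means in practice, the sketch below aggregates entirely invented numbers by group – individual rows carry no identity, and results are only ever reported as group averages. This is not our actual analysis code:

```python
# Toy illustration of population-level analysis: no names, no identities,
# and results reported only as group aggregates. All numbers are invented.

import pandas as pd

donations = pd.DataFrame({
    "age_group":        ["14-15", "14-15", "16-17", "16-17"],
    "sleep_hours":      [6.1, 8.2, 5.9, 7.8],
    "late_night_views": [42, 9, 57, 12],   # views between 11pm and 5am
})

# Compare groups, never individuals.
print(donations.groupby("age_group")[["sleep_hours", "late_night_views"]].mean())
```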
“What exactly are you using this for – and what are you going to do with it?”
Simply put, research. We look at patterns of use – when young people are online, how long they spend on different platforms, and what content they engage with. We can also explore how these patterns relate to other lifestyle factors, like wellbeing and sleep. Examples of our research questions include:
How much harmful content are young people exposed to online?
Do young people with anxiety and depression use social media differently?
What is the impact of late-night social media binges on wellbeing?
The insights we gather will provide robust scientific evidence to help policymakers and platform holders improve online safety, so that everyone can enjoy the benefits of social media without unnecessary risks.
“Are my parents, teachers, or the police going to see this?”
No. Parents, teachers, schools, and the authorities will not be told anything about any individual’s feed (even if it isn’t all as wholesome as dance trends and dog memes). The data researchers access will be anonymous, with no indication of whom they belong to. Young people made it clear they would not donate their data if there were any danger of real-life repercussions.
“What if someone consumes content encouraging self-harm, suicide, or extremism? Will this be reported?”
This research will not make it possible to identify specific individuals who are vulnerable to risky behaviours after consuming such content: names and other personal details are never linked to donated data. Nor will the data we collect contain good indicators of young people who are at risk – we collect only the content being watched, not text searches, comments, or messages. We do believe, however, that there is a moral imperative to conduct this research, so we can understand the scale of these problems among young people. All donors will be signposted to support resources in case they are upset by any harmful content they encounter online.
Accurate, reliable data is the bedrock of scientific research, and the most effective antidote to hysteria and outrage. Only after a decade of moral panic did the seminal study Television in the Lives of Our Children demonstrate that TV was neither harmful nor beneficial in the general case. What mattered was context: the child’s family, lifestyle, and what they watched. Today, everyone is preaching about the harms of social media. But are we watching a repeat where everyone knows how it ends, or a modern-day reboot with a shocking twist? Something certainly feels different. We’re going to have to look further than the Radio Times to find out what’s on, and there’s no 9 o’clock watershed as a safety net. Moral panic is easy; understanding is hard. Data donation lets us move beyond fear of the unknown, peek behind the curtain, and give parents, policymakers, and platforms the evidence they need to inform regulation. Because when it comes to delivering what’s best for young people, blind panic won’t help, but knowledge and insight just might.
[1] Based on TikTok’s reported figure of 190,000,000 videos removed for content violations. If those removals represent 99.9% of violating videos, the remaining 0.1% works out at roughly 190,000 videos.

